00100 EXPLANATIONS AND MODELS
00200 The Nature of Explanation
00300 It is perhaps as difficult to explain explanation itself as
00400 it is to explain anything else. (Nothing, except everything, explains
00500 anything). The explanatory practices of different sciences differ
00600 widely but they all share the purpose of someone attempting to answer
00700 someone else's (or his own) why-how-what-etc. questions about a
00800 situation, event, episode, object or phenomenon. Thus explanation
00900 implies a dialogue whose participants share some interests, beliefs,
01000	and values. A consensus must exist about what are admissible and
01100 appropriate questions and answers. The participants must agree on
01200 what is a sound and reasonable question and what is a relevant,
01300 intelligible, and (believed) correct answer. The explainer tries to
01400 satisfy a questioner's curiosity by making comprehensible why
01500 something is the way it is. The answer may be a definition, an
01600 example, a synonym, a story, a theory, a model-description, etc. The
01700 answer attempts to satisfy curiosity by settling belief. A scientific
01800 explanation aims at convergence of belief in the relevant expert
01900 community.
02000 Suppose a man dies and a questioner (Q) asks an explainer (E):
02100 Q: Why did the man die?
02200 One answer might be:
02300 E: Because he took cyanide.
02400	This explanation might be sufficient to satisfy Q's curiosity and he
02500	stops asking further questions. Or he might continue:
02600	Q: Why did the cyanide kill him?
02700 and E replies:
02800 E: Anyone who ingests cyanide dies.
02900 This explanation appeals to a universal generalization under which is
03000 subsumed the particular fact of this man's death. Subsumptive
03100 explanations satisfy some questioners but not others who, for
03200 example, might want to know about the physiological mechanisms
03300 involved.
03400 Q: How does cyanide work in causing death?
03500 E: It stops respiration so the person dies from lack of oxygen.
03600 If Q has biochemical interests he might inquire further:
03700	Q: What is cyanide's mechanism of drug action on the
03800 respiratory center?
03900	The last two questions refer to causes. When human action is
04000 to be explained, confusion easily arises between appealing to
04100 physical, mechanical causes and appealing to symbolic-level reasons,
04200 that is, learned, acquired procedures or strategies which seem to be
04300 of a different ontological order. (See Toulmin, 1971).
04400 It is established clinical knowledge that the phenomena of
04500 the paranoid mode can be found associated with a variety of physical
04600 disorders. For example, paranoid thinking can be found in patients
04700 with head injuries, hyperthyroidism, hypothyroidism, uremia,
04800 pernicious anemia, cerebral arteriosclerosis, congestive heart
04900 failure, malaria and epilepsy. Also drug intoxications due to
05000 alcohol, amphetamines, marihuana and LSD can be accompanied by the
05100 paranoid mode. In these cases the paranoid mode is not a first-order
05200 disorder but a way of processing information in reaction to some
05300 other underlying disorder. To account for the association of paranoid
05400 thought with these physical states of illness, a psychological
05500 theorist might be tempted to hypothesize that a purposive cognitive
05600 system would attempt to explain ill health by attributing it to other
05700 malevolent human agents. But before making such an explanatory move,
05800 we must consider the at-times elusive distinction between reasons and
05900 causes in explanations of human behavior.
06000 One view of the association of the paranoid mode with
06100 physical disorders might be that the physical illness simply causes
06200	the paranoia, through some unknown mechanism, at a physical level
06300 beyond the influence of deliberate self-direction and self-control.
06400 That is, the resultant paranoid mode represents something that
06500 happens to a person as victim, not something that he does as an
06600 active agent. Mechanical causes thus provide one type of reason in
06700 explaining behavior. Another view is that the paranoid mode can be
06800 explained in terms of symbolically represented reasons consisting of
06900 rules and patterns of rules which specify an agent's intentions and
07000 beliefs. In a given situation does a person as an agent recognize,
07100 monitor and control what he is doing or trying to do? Or does it
07200 just happen to him automatically without conscious deliberation?
07300 This question raises a third view, namely that unrecognized
07400 reasons, aspects of the symbolic representation which are sealed off
07500 from reflective deliberation, can function like mechanical causes in
07600 that they are inaccessible to voluntary control. If they can be
07700 brought to consciousness, such reasons can sometimes be modified
07800 voluntarily by the agent, as a language user, by reflexively talking
07900 to and instructing himself. This second-order monitoring and control
08000 through language contrasts with an agent's inability to modify
08100 mechanical causes or symbolic reasons which lie beyond the influence
08200 of self-criticism and self-emancipation carried out through
08300 linguistically mediated argumentation. Timeworn conundrums about
08400 concepts of free-will, determinism, responsibility, consciousness and
08500 the powers of mental action here plague us unless we can take
08600 advantage of a computer analogy in which a clear and useful
08700 distinction is drawn between levels of mechanical hardware and
08800 symbolically represented programs. This important distinction will be
08900 elaborated on shortly.
09000
09100 Each of these three views provides a serviceable perspective
09200 depending on how a disorder is to be explained and corrected. When
09300 paranoid processes occur during amphetamine intoxication they can be
09400 viewed as biochemically caused and beyond the patient's ability to
09500 control volitionally through internal self-correcting dialogues with
09600 himself. When a paranoid moment occurs in a normal person, it can be
09700 viewed as involving a symbolic misinterpretation. If the paranoid
09800 misinterpretation is recognized as unjustified, a normal person has
09900 the emancipatory power to revise or reject it through internal
10000 debate. Between these extremes of drug-induced paranoid states and
10100 the self-correctible paranoid moments of the normal person, lie cases
10200	of paranoid personalities, paranoid reactions, and the paranoid mode
10300 associated with the major psychoses (schizophrenic and
10400 manic-depressive).
10500 One opinion has it that the major psychoses are a consequence
10600 of unknown physical causes and are beyond deliberate voluntary
10700 control. But what are we to conclude about paranoid personalities
10800 and paranoid reactions where no hardware disorder is detectable or
10900 suspected? Are such persons to be considered patients to whom
11000 something is mechanically happening at the physical level or are they
11100 agents whose behavior is a consequence of what they do at the
11200	symbolic level? Or are they both agent and patient depending on
11300 how one views the self-modifiability of their symbolic processing?
11400 In these perplexing cases we shall take the position that in normal,
11500	neurotic and characterological paranoid modes, the psychopathology
11600 represents something that happens to a man as a consequence of what
11700 he has experientially undergone, of something he now does, and
11800 something he now undergoes. Thus he is both agent and victim whose
11900 symbolic processes have powers to do and liabilities to undergo.
12000 His liabilities are reflexive in that he is victim to, and can
12100 succumb to, his own symbolic structures.
12200
12300 From this standpoint I would postulate a duality at the
12400 symbolic level between reasons and causes. That is, a reason can
12500 operate as an unrecognized cause in one context and be offered as a
12600 recognized justification in another. It is, of course, not reasons
12700 themselves which operate as causes but the execution of the
12800 reason-rules which serves as a determinant of behavior. Human
12900 symbolic behavior is non-determinate to the extent that it is
13000 self-determinate. Thus the power to select among alternatives, to
13100 make some decisions freely and to change one's mind is non-illusory.
13200 When a reason is recognized to function as a cause and is accessible
13300 to self-monitoring (the monitoring of monitoring), emancipation from
13400 it can occur through change or rejection of belief. In this sense an
13500 at least two-levelled system is self-changeable and
13600 self-emancipatory, within limits.
13700 Explanations both in terms of causes and reasons can be
13800 indefinitely extended and endless questions can be asked at each
13900 level of analysis. Just as the participants in explanatory dialogues
14000 decide what is taken to be problematic, so they also determine the
14100 termini of questions and answers. Each discipline has its
14200 characteristic stopping points and boundaries.
14300 Underlying such explanatory dialogues are larger and smaller
14400 constellations of concepts which are taken for granted as
14500 nonproblematic background. Hence in considering the strategies of
14600 the paranoid mode "it goes without saying" that any living teleonomic
14700	system, as the larger constellation, strives for maintenance and
14800 expansion of life. Also it should go without saying that, at a lower
14900 level, ion transport takes place through nerve-cell membranes. Every
15000	function of an organism can be viewed as governing a subfunction
15100 beneath and depending on a transfunction above which calls it into
15200 play for a purpose.
15300 Just as there are many alternative ways of describing, there
15400 are many alternative ways of explaining. An explanation is geared to
15500 some level of what the dialogue participants take to be the
15600 fundamental structures and processes under consideration. Since in
15700 psychiatry we cope with patients' problems using mainly
15800	symbolic-conceptual techniques (it is true that the pill, the knife,
15900	and electricity are also available), we are interested in aspects of
16000 human conduct which can be explained, understood, and modified at a
16100 symbol-processing level. Psychiatrists need theoretical symbolic
16200 systems from which their clinical experience can be logically derived
16300 to interpret the case histories of their patients. Otherwise they are
16400	faced with mountains of indigestible data and dross. To quote Einstein:
16500 "Science is an attempt to make the chaotic diversity of our sense
16600 experience correspond to a logically uniform system of thought by
16700 correlating single experiences with the theoretic structure."
16800
16900 The Symbol Processing Viewpoint
17000
17100 Segments and sequences of human behavior can be studied from
17200 many perspectives. In this monograph I shall view sequences of
17300 paranoid symbolic behavior from an information processing standpoint
17400 in which persons are viewed as symbol users. For a more complete
17500	explication and justification of this perspective, see Newell (1973)
17600 and Newell and Simon (1972).
17700 In brief, from this vantage point we define information as
17800 knowledge in a symbolic code. Symbols are considered to be
17900 representations of experience classified as objects, events,
18000 situations and relations. A symbolic process is a symbol-manipulating
18100 activity posited to account for observable symbolic behavior such as
18200 linguistic interaction. Under the term "symbol-processing" I include
18300 the seeking, manipulating and generating of symbols.
18400 Symbol-processing explanations postulate an underlying
18500 structure of hypothetical processes, functions, strategies, or
18600 directed symbol-manipulating procedures, having the power to produce
18700 and being responsible for observable patterns of phenomena. Such a
18800 structure offers an ethogenic (ethos = conduct or character, genic =
18900 generating) explanation for sequences or segments of symbolic
19000	behavior. (See Harre and Secord, 1972). From an ethogenic viewpoint,
19100 we can posit processes, functions, procedures and strategies as being
19200 responsible for and having the power to generate the symbolic
19300 patterns and sequences characteristic of the paranoid mode.
19400 "Strategies" is perhaps the best general term since it implies ways
19500 of obtaining an objective - ways which have suppleness and pliability
19600 since choice of application depends on circumstances. However
19700 I shall use all these terms interchangeably.
19800
19900 Symbolic Models
20000 Theories and models share many functions and are often
20100 considered equivalent. One important distinction lies in the fact
20200	that a theory states that a subject has a certain structure but does not
20300	exhibit that structure in itself. (See Kaplan, 1964). In the case of
20400 computer simulation models there exists a further useful distinction.
20500 Computer simulation models which have the ability to converse in
20600 natural language using teletypes, actualize or realize a theory in
20700 the form of a dialogue algorithm. In contrast to a verbal, pictorial
20800 or mathematical representation, such a model, as a result of
20900 interaction, changes its states over time and ends up in a state
21000 different from its initial state.
21100 Einstein once remarked, in contrasting the act of description
21200 with what is described, that it is not the function of science to
21300 give the taste of the soup. Today this view would be considered
21400 unnecessarily restrictive. For example, a major test for synthetic
21500 insulin is whether it reproduces the effects, or at least some of the
21600 effects (such as lowering blood sugar), shown by natural insulin.
21700 To test whether a simulation is successful, its effects must be
21800	compared with the effects produced by the naturally-occurring
21900 subject-process being modelled. An interactive simulation model
22000 which attempts to reproduce sequences of experienceable reality,
22100 offers an interviewer a first-hand experience with a concrete case.
22200 In constructing a
22300 computer simulation, a theory is modelled to discover a sufficiently
22400 rich structure of hypotheses and assumptions to generate the
22500 observable subject-behavior under study. A dialogue algorithm
22600 allows an observer to interact with a concrete specimen of a class in
22700 detail. In the case of our model, the level of detail is the level of
22800 the symbolic behavior of conversational language. This level is
22900 satisfying to a clinician since he can compare the model's behavior
23000 with its natural human counterparts using familiar skills of clinical
23100 dialogue. Communicating with the paranoid model by means of teletype,
23200 an interviewer can directly experience for himself a sample of the
23300 type of impaired social relationship which develops with someone in
23400	the paranoid mode.
23500 An algorithm composed of symbolic computational procedures
23600 converts input symbolic structures into output symbolic structures
23700 according to certain principles. The modus operandi of such a
23800 symbolic model is simply the workings of an algorithm when run on a
23900 computer. At this level of explanation, to answer `why?' means to
24000 provide an algorithm which makes explicit how symbolic structures
24100 collaborate, interplay and interlock - in short, how they are
24200 organized to generate patterns of manifest phenomena.
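
	As a purely illustrative sketch (in Python, with rules and symbols
invented here and not those of the actual paranoid model), the following
toy procedure shows what it means for an algorithm to convert an input
symbolic structure into an output symbolic structure according to
explicit principles:

	RULES = [
	    (lambda concepts: "POLICE" in concepts,
	     ("ACCUSATION", "Why are you bringing up the police?")),
	    (lambda concepts: "HEALTH" in concepts,
	     ("DENIAL", "There is nothing wrong with my health.")),
	]
	DEFAULT = ("NEUTRAL", "Go on.")

	def dialogue_step(concepts):
	    # Convert an input symbol structure (a list of concept symbols)
	    # into an output symbol structure (a reply frame) by rule.
	    for condition, reply in RULES:
	        if condition(concepts):
	            return reply
	    return DEFAULT

	print(dialogue_step(["POLICE", "QUESTION"]))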
24300
24400 To simulate the sequential input-output behavior of a system
24500	using symbolic computational procedures, one writes an algorithm
24600 which, when run on a computer, produces symbolic behavior resembling
24700	that of the subject system being simulated (Colby, 1973). The
24800 resemblance is achieved through the workings of an inner posited
24900 structure in the form of an algorithm, an organization of
25000 symbol-manipulating procedures which are ethogenically responsible
25100 for the characteristic observable behavior at the input-output level.
25200 Since we do not know the structure of the "real" simulative processes
25300 used by the mind-brain, our posited structure stands as an imagined
25400 theoretical analogue, a possible and plausible organization of
25500 processes analogous to the unknown processes and serving as an
25600 attempt to explain the workings of the system under study. A
25700 simulation model is thus deeper than a pure black-box explanation
25800 because it postulates functionally equivalent processes inside the
25900 box to account for outwardly observable patterns of behavior. A
26000 simulation model constitutes an interpretive explanation in that it
26100 makes intelligible the connections between external input, internal
26200 states and output by positing intervening symbol-processing
26300 procedures operating between symbolic input and symbolic output. To
26400 be illuminating, a description of the model should make clear why and
26500 how it reacts as it does under various circumstances.
26600 Citing a universal generalization to explain an individual's
26700 behavior is unsatisfactory to a questioner who is interested in what
26800 powers and liabilities are latent behind manifest phenomena. To say
26900 "x is nasty because x is paranoid and all paranoids are nasty" may be
27000 relevant, intelligible and correct. But another type of explanation
27100 is possible: a model-explanation referring to a structure which can
27200 account for "nasty" behavior as a consequence of input and internal
27300 states of a system. A model explanation specifies particular
27400	antecedents and processes through which antecedents generate the
27500 phenomena. An ethogenic approach to explanation assumes perceptible
27600 phenomena display the regularities and nonrandom irregularities they
27700 do because of the nature of an imperceptible and inaccessible
27800 underlying structure. The posited theoretical structure is an
27900 idealization, unobservable in human heads, not because it is too
28000 small, but because it is an imaginary analogue to the inaccessible
28100 structure.
28200 When attempts are made to explain human behavior, principles
28300 in addition to those accounting for the natural order are invoked.
28400 "Nature entertains no opinions about us", said Nietzsche. But human
28500	natures do, and therein lies a source of complexity for the
28600 understanding of human conduct. Until the first quarter of the 20th
28700 century, natural sciences were guided by the Newtonian ideal of
28800 perfect process knowledge about inanimate objects whose behavior
28900 could be subsumed under lawlike generalizations. When a deviation
29000 from a law was noticed, it was the law which was subsequently
29100 modified, since by definition physical objects did not have the power
29200 to break laws. When the planet Mercury was observed to deviate from
29300 the orbit predicted by Newtonian theory, no one accused the planet of
29400 being an intentional agent disobeying a law. Instead it was suspected
29500 that something was incorrect about the theory.
29600 Subsumptive explanation is the acceptable norm in many fields
29700 but it is seldom satisfactory in accounting for particular sequences
29800 of behavior in living purposive systems. When physical bodies fall
29900	in the macroscopic world, few find it scientifically useful to posit
30000	that bodies have an intention to fall. But in the case of living
30100 systems, especially ourselves, our ideal explanatory practice is
30200 teleonomically Aristotelian in utilizing a concept of intention.
30300 Consider a man participating in a high-diving contest. In
30400	falling towards the water he accelerates at the rate of 32 feet per
30500	second per second. Viewing the man simply as a falling body, we explain his rate
30600 of fall by appealing to a physical law. Viewing the man as a human
30700 intentionalistic agent, we explain his dive as the result of an
30800	intention to dive in a certain way in order to win the diving contest.
30900 His conduct (in contrast to mere movement) involves an intended
31000 following of certain conventional rules for what is judged by humans
31100 to constitute, say, a swan dive. Suppose part-way down he chooses to
31200 change his position in mid-air and enter the water thumbing his nose
31300 at the judges. He cannot disobey the law of falling bodies but he can
31400 disobey or ignore the rules of diving. He can also make a gesture
31500 which expresses disrespect and which he believes will be interpreted
31600 as such by the onlookers. Our diver breaks a rule for diving but
31700 follows another rule which prescribes gestural action for insulting
31800 behavior. To explain the actions of diving and nose-thumbing, we
31900 would appeal, not to laws of natural order, but to an additional
32000 order, to principles of human order. This order is superimposed on
32100	laws of natural order and takes into account (1) standards of
32200 appropriate action in certain situations and (2) the agent's inner
32300 considerations of intention, belief and value which he finds
32400 compelling from his point of view. In this type of explanation the
32500 explanandum, that which is being explained, is the agent's informed
32600 actions, not simply his movements. When a human agent performs an
32700 action in a situation, we can ask: is the action appropriate to that
32800 situation and if not, why did the agent believe his action to be
32900 called for?
33000 Symbol-processing explanations of human conduct rely on
33100 concepts of intention, belief, action, affect, etc. These terms are
33200 close to the terms of ordinary language as is characteristic of early
33300 stages of explanations. It is also important to note that such terms
33400 are commonly utilized in describing computer algorithms which follow
33500 rules in striving to achieve goals. In an algorithm these ordinary
33600 language terms can be explicitly defined and represented.
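
	For illustration only (a minimal Python sketch, not the
representation actually used in the model), such terms can be given
explicit definitions as data structures within a program:

	from dataclasses import dataclass, field

	@dataclass
	class AgentState:
	    beliefs: dict = field(default_factory=dict)     # proposition -> degree of credence
	    intentions: list = field(default_factory=list)  # goals the agent is pursuing
	    affects: dict = field(default_factory=dict)     # e.g. levels of fear, anger, mistrust

	state = AgentState(
	    beliefs={"others intend harm": 0.8},
	    intentions=["avoid exposure", "detect malevolence"],
	    affects={"fear": 0.6, "anger": 0.3, "mistrust": 0.7},
	)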
33700 Psychiatry deals with the practical concerns of inappropriate
33800 action, belief, etc. on the part of a patient. His behavior may be
33900 inappropriate to onlookers since it represents a lapse from the
34000 expected, a contravention of the human order. It may even appear this
34100 way to the patient in monitoring and directing himself. But
34200 sometimes, as in severe cases of the paranoid mode, the patient's
34300 behavior does not appear anomalous to himself. He maintains that
34400 anyone who understands his point of view, who conceptualizes
34500 situations as he does from the inside, would consider his outward
34600 behavior appropriate and justified. What he does not understand or
34700 accept is that his inner conceptualization is mistaken and represents
34800 a misinterpretation of the events of his experience.
34900 The model to be presented in the sequel constitutes an
35000 attempt to explain some regularities and particular occurrences of
35100 symbolic (conversational) paranoid behavior observable in the
35200 clinical situation of a psychiatric interview. The explanation is
35300 at the symbol-processing level of linguistically communicating agents
35400 and is cast in the form of a dialogue algorithm. Like all
35500 explanations, it is tentative, incomplete, and does not claim to
35600	represent the only conceivable structure of processes.
35700
35800 The Nature of Algorithms
35900
36000 Theories can be presented in various forms: prose essays,
36100 mathematical equations and computer programs. To date most
36200 theoretical explanations in psychiatry and psychology have consisted
36300 of natural language essays with all their well-known vagueness and
36400 ambiguities. Many of these formulations have been untestable, not
36500 because relevant observations were lacking but because it was unclear
36600 what the essay was really saying. Clarity is needed. Science may
36700 begin with metaphors but it should end up with algorithms.
36800 An alternative way of formulating psychological theories is
36900 now available in the form of symbol-processing algorithms, computer
37000 programs, which have the virtue of being explicit in their
37100 articulation and which can be run on a computer to test internal
37200 consistency and external correspondence with the data of observation.
37300 The subject-matter or subject of a model is what it is a model of;
37400 the source of a model is what it is based upon. Since we do not know
37500 the "real" algorithms used by people, we construct a theoretical
37600 model, based upon computer algorithms. This model represents a
37700 partial analogy. (Harre, 1970). The analogy is made at the symbol-
37800 processing level, not at the hardware level. A functional,
37900 computational or procedural equivalence is being postulated. The
38000 question then becomes one of categorizing the extent of the
38100 equivalence. A beginning (first-approximation) functional
38200 equivalence might be defined as indistinguishability at the level of
38300 observable I-O pairs. A stronger equivalence would consist of
38400 indistinguishability at inner I-O levels. That is, there exists a
38500 correspondence between what is being done and how it is being done at
38600 a given operational level.
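
	A first-approximation test of this kind can be pictured as follows
(a hedged Python sketch; the judge function and the recorded interview
pairs are stand-ins invented for illustration):

	def io_equivalent(model, recorded_pairs, judge):
	    # Beginning (first-approximation) functional equivalence:
	    # for every recorded input, a judge cannot tell the model's
	    # output apart from the human subject's output.
	    return all(judge(model(question), human_reply)
	               for question, human_reply in recorded_pairs)

	# Example use with trivial stand-ins:
	pairs = [("How are you?", "Why do you want to know?")]
	print(io_equivalent(lambda q: "Why do you want to know?", pairs,
	                    lambda a, b: a == b))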
38700 An algorithm represents an organization of symbol-processing
38800	strategies or functions which constitute an "effective procedure". An
38900	effective procedure consists of three components:
39000
39100 (1) A programming language in which procedural rules of
39200 behavior can be rigorously and unambiguously specified.
39300 (2) An organization of procedural rules which constitute
39400 the algorithm.
39500 (3) A machine processor which can rapidly and reliably carry
39600 out the processes specified by the procedural rules.
39700	The specification of (2), written in the formally defined
39800 programming language of (1), is termed an algorithm or program
39900 whereas (3) involves a computer as the machine processor, a set of
40000 deterministic physical mechanisms which can perform the operations
40100 specified in the algorithm. The algorithm is called `effective'
40200 because it actually works, performing as intended and producing the
40300	effects desired by the model builders when run on the machine
40400 processor.
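
	The three components can be pictured in miniature by the following
Python sketch, in which the rule notation, the rules themselves and the
triggering phrases are all invented for illustration and make no claim
to reproduce the actual algorithm:

	# (2) An organization of procedural rules: ordered (trigger, response) pairs.
	rules = [
	    ("hello",  "WHY ARE YOU GREETING ME?"),
	    ("doctor", "I DON'T TRUST DOCTORS."),
	]

	# (3) A machine processor -- here the Python interpreter carrying out the
	# loop -- executing rules written in (1) a notation that specifies them
	# unambiguously.
	def processor(rules, line):
	    for trigger, response in rules:
	        if trigger in line.lower():
	            return response
	    return "GO ON."

	print(processor(rules, "Hello there"))   # WHY ARE YOU GREETING ME?

The sketch is "effective" only in the modest sense that, when run, it
actually produces the behavior its rules specify.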
40500 A simulation model is composed of procedures taken to be
40600 analogous to the imperceptible and inaccessible procedures. We
40700 are not claiming they ARE analogous, we are MAKING them so. The
40800 analogy being drawn here is between specified processes and their
40900 generating systems. Thus, in comparing mental processes to
41000 computational processes, we might assert:
41100
41200	     mental process                  computational process
41300	     ------------------      ::      ---------------------
41400	     brain hardware and              computer hardware and
41500	     programs                        programs
41600
41700	Many of the classical mind-brain problems arose because
41800 there did not exist a familiar, well-understood analogy to help
41900 people imagine how a system could work having a clear separation
42000 between its hardware descriptions and its program descriptions. With
42100 the advent of computers and programs some mind-brain perplexities
42200	disappear. (Colby, 1971). The analogy is not simply between computer
42300 hardware and brain wetware. We are not comparing the structure of
42400 neurons with the structure of transistors; we are comparing the
42500 organization of symbol-processing procedures in an algorithm with
42600 symbol-processing procedures of the mind-brain. The central nervous
42700 system contains a representation of the experience of its holder. A
42800 model builder has a conceptual representation of that representation
42900 which he demonstrates in the form of a model. Thus the model is a
43000 demonstration of a representation of a representation.
43100 An algorithm can be run on a computer in two forms, a
43200 compiled version and an interpreted version. In the compiled version
43300 a preliminary translation has been made from the higher-level
43400 programming language (source language) into lower-level machine
43500 language (object language) which controls the on-off state of
43600 hardware switching devices. When the compiled version is run, the
43700 instructions of the machine-language code are directly executed. In
43800 the interpreted version each high-level language instruction is first
43900 translated into machine language, executed, and then the process is
44000 repeated with the next instruction. One important aspect of the
44100	distinction between compiled and interpreted versions is that the
44200 compiled version, now written in machine language, is not easily
44300 accessible to change using the higher-level language. In order to
44400	change the program, the source-language version must be modified and
44500	then re-compiled into the object language. The
44600 rough analogy with ever-changing human symbolic behavior lies in
44700 suggesting that modifications require change at the source-language
44800	level. Otherwise compiled algorithms are inaccessible to second-order
44900 monitoring and modification.
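
	The distinction can be miniaturized in Python (a sketch only; the
instruction notation is invented): the interpreted version translates
and executes one instruction at a time, while the compiled version is
translated once into a form that can no longer conveniently be changed
from the source level.

	SOURCE = ["PRINT hello", "PRINT world"]        # source-language program

	def translate(instruction):                    # source -> executable form
	    _, argument = instruction.split()
	    return lambda: print(argument)

	def run_interpreted(source):
	    for instruction in source:                 # translate, execute, repeat
	        translate(instruction)()

	def run_compiled(source):
	    object_code = [translate(i) for i in source]   # translate everything first
	    for step in object_code:                   # then execute; changing behavior
	        step()                                 # now means editing SOURCE and
	                                               # re-translating the whole program

	run_interpreted(SOURCE)
	run_compiled(SOURCE)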
45000 Since we are taking running computer programs as a source of
45100 analogy for a paranoid model, logical errors or pathological behavior
45200 on the part of such programs are of interest to the
45300 psychopathologist. These errors can be ascribed to the hardware
45400 level, to the interpreter or to the programs which the interpreter
45500 executes. Different remedies are required at different levels. If
45600 the analogy is to be clinically useful in the case of human
45700 pathological behavior, it will become a matter of influencing
45800 symbolic behavior with the appropriate techniques.
45900 Since the algorithm is written in a programming language, it
46000 is hermetic except to a few people, who in general do not enjoy
46100 reading other people's code. Hence the intelligibility and
46200 scrutability requirement for explanations must be met in other ways.
46300 In an attempt to open the algorithm to scrutiny I shall describe the
46400	model in detail, making profuse use of diagrams and interview examples.
46500
46600
46700 Analogy
46800
46900 I have stated that an interactive simulation model of
47000 symbol-manipulating processes reproduces sequences of symbolic
47100 behavior at the level of linguistic communication. The reproduction
47200 is achieved through the operations of an algorithm consisting of an
47300 organization of hypothetical symbol-processing strategies or
47400 procedures which can generate the I-O behavior of the subject-
47500	processes under investigation. The algorithm is an "effective
47600	procedure" in the sense that it really works in the manner intended by the
47700 model-builders. In the model to be described, the paranoid algorithm
47800 generates linguistic I-O behavior typical of patients whose
47900 symbol-processing is dominated by the paranoid mode. Comparisons can
48000 be made between samples of the I-O behaviors of patients and model.
48100 But the analogy is not to be drawn at this level. Mynah birds and
48200 tape recorders also reproduce human linguistic behavior but no one
48300 believes the reproduction is achieved by powers analogous to human
48400 powers. Given that the manifest outermost I-O behavior of the model
48500 is indistinguishable from the manifest outward I-O behavior of
48600 paranoid patients, does this imply that the hypothetical underlying
48700 processes used by the model are analogous to (or perhaps the same
48800 as?) the underlying processes used by persons in the paranoid mode?
48900 This deep and far-reaching question should be approached with caution
49000 and only when we are first armed with some clear notions about
49100 analogy, similarity, faithful reproduction, indistinguishability and
49200 functional equivalence.
49300 In comparing two things (objects, systems or processes ) one
49400 can cite properties they have in common (positive analogy),
49500 properties they do not share (negative analogy) and properties which
49600 we do not yet know whether they are positive or negative (neutral
49700	analogy). (See Hesse, 1966). No two things are exactly alike in every
49800 detail. If they were identical in respect to all their properties
49900 then they would be copies. If they were identical in every respect
50000 including their spatio-temporal location we would say we have only
50100 one thing instead of two. Everything resembles something else and
50200 maybe everything else, depending upon how one cites properties.
50300 In an analogy a similarity relation is evoked. "Newton did
50400 not show the cause of the apple falling but he showed a similitude
50500	between the apple and the stars." (D'Arcy Thompson). Huygens suggested
50600 an analogy between sound waves and light waves in order to understand
50700 something less well-understood (light) in terms of something better
50800 understood (sound). To account for species variation, Darwin
50900 postulated a process of natural selection. He constructed an
51000 analogy from two sources, one from artificial selection as practiced
51100 by domestic breeders of animals and one from Malthus' theory of a
51200 competition for existence in a population increasing geometrically
51300 while its resources increase arithmetically. Bohr's model of the atom
51400 offered an analogy between solar system and atom. These well-known
51500 historical examples should be sufficient here to illustrate the role
51600 of analogies in theory construction. Analogies are made in respect
51700 to those properties which constitute the positive and neutral
51800 analogy. The negative analogy is ignored. Thus Bohr's model of
51900 the atom as a miniature planetary system was not intended to suggest
52000 that electrons possessed color or that planets jumped out of their
52100 orbits.
52200
52300 Functional Equivalence
52400
52500 When human symbolic processes are the subject of a simulation
52600 model, we draw the analogy from two sources, symbolic computation and
52700 psychology. The analogy made is between systems known to have the
52800 power to process symbols, namely, persons and computers. The
52900 properties compared in the analogy are obviously not physical or
53000 substantive such as blood and wires, but functional and procedural.
53100 We want to assume that not-well-understood mental procedures in a
53200 person are similar to the more accessible and better understood
53300 procedures of symbol-processing which take place in a computer. The
53400 analogy is one of functional or procedural equivalence. (For a
53500	further account of functional analysis see Hempel, 1965).
53600 Mousetraps are functionally equivalent. There exists a large set
53700 of physical mechanisms for catching mice. The term "mousetrap" says
53800	what each member of the set has in common. Each takes as input a live
53900 mouse and yields as output a dead one. Systems equivalent from one
54000	point of view may not be equivalent from another (Fodor, 1968).
54100 If model and human are indistinguishable at the manifest
54200 level of linguistic I-O pairs, then they can be considered equivalent
54300 at that level. If they can be shown to be indistinguishable at
54400	more internal symbolic levels, then a stronger equivalence exists. How stringent
54500 and how extensive are the demands for equivalence to be? Must
54600 there be point-to-point correspondences at every level? What is to
54700 count as a point and what are the levels? Procedures can be specified
54800 and ostensively pointed to in an algorithm, but how can we point to
54900 unobservable symbolic processes in a person's head? There is an
55000 inevitable limit to scrutinizing the "underlying" processes of the
55100 world. Einstein likened this situation to a man explaining the
55200 behavior of a watch without opening it: "He will never be able to
55300 compare his picture with the real mechanism and he cannot even
55400 imagine the possibility or meaning of such a comparison".
55500 In constructing an algorithm one puts together an
55600 organization of collaborating functions or procedures. A function
55700 takes some symbolic structure as input and yields some symbolic
55800 structure as output. Two computationally equivalent functions, having
55900 the same input and yielding the same output, can differ `inside' the
56000 function at the instruction level.
56100 Consider an elementary programming problem which students in
56200 symbolic computation are often asked to solve. Given a list L of
56300 symbols, L=(A B C D), as input, construct a function or procedure
56400 which will convert this list to the list RL in which the order of the
56500 symbols is reversed, i.e. RL=(D C B A). There are many ways of
56600 solving this problem and the code of one student may differ greatly
56700 from that of another at the level of individual instructions. But the
56800 differences of such details are irrelevant. What is significant is
56900 that the solutions make the required conversion from L to RL. The
57000 correct solutions will all be computationally equivalent at the
57100 input-output level since they take the same symbolic structures as
57200 input and produce the same symbolic output.
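
	Two such solutions, written here in Python rather than in the
list-processing language a student of that period would have used, make
the point concrete: their inner instructions differ, yet they are
computationally equivalent at the input-output level.

	def reverse_iterative(l):
	    rl = []
	    for symbol in l:            # push each symbol onto the front of the result
	        rl.insert(0, symbol)
	    return rl

	def reverse_recursive(l):
	    if not l:                   # an empty list reverses to itself
	        return []
	    return reverse_recursive(l[1:]) + [l[0]]

	L = ["A", "B", "C", "D"]
	assert reverse_iterative(L) == reverse_recursive(L) == ["D", "C", "B", "A"]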
57300 If we propose that an algorithm we have constructed is
57400 functionally equivalent to what goes on in humans when they process
57500	symbolic structures, how can we justify this position?
57600 Indistinguishability tests at, say, the linguistic level provide
57700 evidence only for beginning equivalence. We would like to be able to
57800 have access to the underlying processes in humans the way we can with
57900 algorithms. (Admittedly, we do not directly observe processes at all
58000 levels but only the products of some). The difficulty lies in
58100 identifying, making accessible, and counting processes in human
58200 heads. Many symbol-processing experiments are now being designed
58300 and carried out. We must have great patience with this type of
58400 experimental information-processing psychology.
58500 In the meantime, besides first-approximation I-O equivalence
58600 and plausibility arguments, one might appeal to extra-evidential
58700 support offering parallelisms from neighboring scientific domains.
58800 One can offer analogies between what is known to go on at a molecular
58900 level in the cells of living organisms and what goes on in an
59000 algorithm. For example, a DNA molecule in the nucleus of a cell
59100 consists of an ordered sequence (list) of nucleotide bases (symbols)
59200	coded in triplets termed codons (words). Each codon
59300	specifies which amino acid during protein synthesis is to be linked
59400 into the chain of polypeptides making up the protein. The codons
59500 function like instructions in a programming language. Some codons are
59600 known to operate as terminal symbols analogous to symbols in an
59700	algorithm which mark the end of a list. If, as a result of a
59800 mutation, a nucleotide base is changed, the usual protein will not be
59900	synthesized. The resulting polypeptide chain may have lethal or
60000	trivial consequences for the organism, depending on what must be
60100	passed on to other processes which require the polypeptide to be handed
60200	over to them. Similarly in an algorithm. If a symbol or word in a
60300 procedure is incorrect, the procedure cannot operate in its intended
60400 manner. Such a result may be lethal or trivial to the algorithm
60500 depending on what information the faulty procedure must pass on at
60600 its interface with other procedures in the overall organization. Each
60700 procedure in an algorithm is embedded in an organization of
60800 collaborating procedures just as are functions in living organisms.
60900 We know that at the molecular level of living organisms there exists
61000 a process such as serial progression along a nucleotide sequence,
61100 which is analogous to stepping down a list in an algorithm. Further
61200 analogies can be made between point mutations in which DNA bases can
61300 be inserted, deleted, substituted or reordered and symbolic
61400 computation in which the same operations are commonly carried out on
61500 symbolic structures. Such analogies are interesting as
61600 extra-evidential support but obviously closer linkages are needed
61700 between the macro-level of symbolic processes and the micro-level of
61800 molecular information-processing within cells.
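
	The analogy, and only the analogy, can be sketched in Python; the
codon names are real enough, but the reading procedure below is an
invented illustration of stepping down a list to a terminal symbol, not
a claim about the molecular machinery itself.

	STOP = "UAA"                     # a codon treated here as a terminal symbol

	def read_chain(codons):
	    # Step down the list, collecting symbols until the terminator is
	    # reached, much as an algorithm steps down a list to its end marker.
	    chain = []
	    for codon in codons:
	        if codon == STOP:
	            break
	        chain.append(codon)
	    return chain

	normal  = ["AUG", "GCU", "UAA"]
	mutated = ["AUG", "GCC", "UAA"]  # one symbol substituted
	print(read_chain(normal), read_chain(mutated))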
61900 To obtain evidence for the acceptability of a model as true
62000 or authentic, empirical tests are utilized as validation procedures.
62100 Such tests should also tell us which is the best among alternative
62200	versions of a family of models and, indeed, among alternative families
62300 of models. Scientific explanations do not stand alone in isolation.
62400 They are evaluated relative to rival contenders for the position of
62500 "best available". Once we accept a theory or model as the best
62600 available, can we be sure it is correct or true? We can never know
62700 with certainty. Theories and models are provisional approximations to
62800	nature, destined to be superseded by better ones.